Search for: All records

Creators/Authors contains: "Stavropoulos, Konstantinos"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).

  1. Traditional models of supervised learning require a learner, given examples from an arbitrary joint distribution on ℝ^d Ɨ {±1}, to output a hypothesis that competes (to within ϵ) with the best-fitting concept from a class. To overcome hardness results for learning even simple concept classes, this paper introduces a smoothed-analysis framework that only requires competition with the best classifier robust to small random Gaussian perturbations (one plausible formalization is sketched after this entry). This subtle shift enables a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (multi-index model) and (2) has bounded Gaussian surface area. This class includes functions of halfspaces and low-dimensional convex sets, which are only known to be learnable in non-smoothed settings with respect to highly structured distributions like Gaussians. The analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, the authors present the first algorithm for agnostically learning intersections of k halfspaces in time k^{poly(log k/(ϵγ))}, where γ is the margin parameter. Previously, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
    Free, publicly-accessible full text available April 30, 2026
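    A minimal LaTeX sketch of one plausible way to state the smoothed benchmark mentioned above (the perturbation scale σ and all notation here are our illustration, not necessarily the paper's exact definition): the learner must compete with the best concept evaluated on Gaussian-perturbed inputs.

      % best smoothed error over the concept class C (illustrative notation)
      \[
        \mathrm{opt}_{\sigma} \;=\; \min_{f \in \mathcal{C}} \;
          \Pr_{(x,y)\sim D,\; z\sim\mathcal{N}(0,\sigma^{2} I_{d})}\bigl[\, f(x+z) \neq y \,\bigr]
      \]
      % the learner must output h with Pr[h(x) != y] <= opt_sigma + epsilon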
  2. We study the problem of PAC learning γ-margin halfspaces with Massart noise. We propose a simple proper learning algorithm, the Perspectron, that has sample complexity OĢƒ((ϵγ)^āˆ’2) and achieves classification error at most Ī· + ϵ, where Ī· is the Massart noise rate. Prior works [DGT19, CKMY20] came with worse sample complexity guarantees (in both ϵ and γ) or could only handle random classification noise [DDK+23, KIT+23], a much milder noise assumption. We also show that our results extend to the more challenging setting of learning generalized linear models with a known link function under Massart noise, achieving a similar sample complexity to the halfspace case. This significantly improves upon the prior state of the art in this setting due to [CKMY20], who introduced this model. (A classical margin-perceptron baseline is sketched after this entry for context.)
    Free, publicly-accessible full text available January 16, 2026
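    The Perspectron itself is not specified in this abstract, so no attempt is made to reproduce it; for context only, here is a minimal Python sketch of the classical margin perceptron it builds on, under the assumption of unit-norm examples and no noise.

      import numpy as np

      def margin_perceptron(X, y, max_epochs=1000):
          """Classical perceptron (noise-free baseline; NOT the Perspectron).
          X: (n, d) array with ||x|| <= 1 per row; y: labels in {-1, +1}.
          Makes O(1/gamma^2) updates when a gamma-margin halfspace exists."""
          w = np.zeros(X.shape[1])
          for _ in range(max_epochs):
              mistakes = 0
              for xi, yi in zip(X, y):
                  if yi * np.dot(w, xi) <= 0:  # mistake (or zero margin): update
                      w += yi * xi
                      mistakes += 1
              if mistakes == 0:                # separated the sample: stop
                  break
          return w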
  3. We study the problem of learning under arbitrary distribution shift, where the learner is trained on a labeled set from one distribution but evaluated on a different, potentially adversarially generated test distribution. We focus on two frameworks: PQ learning [GKKM'20], allowing abstention on adversarially generated parts of the test distribution, and TDS learning [KSV'23], permitting abstention on the entire test distribution if distribution shift is detected. All prior known algorithms either rely on learning primitives that are computationally hard even for simple function classes, or end up abstaining entirely even in the presence of a tiny amount of distribution shift. We address both of these challenges for natural function classes, including intersections of halfspaces and decision trees, and standard training distributions, including Gaussians. For PQ learning, we give efficient learning algorithms; for TDS learning, our algorithms can tolerate moderate amounts of distribution shift. At the core of our approach is an improved analysis of spectral outlier-removal techniques from learning with nasty noise (the classical filtering step is sketched after this entry). Our analysis can (1) handle an arbitrarily large fraction of outliers, which is crucial for handling arbitrary distribution shifts, and (2) obtain stronger bounds on polynomial moments of the distribution after outlier removal, yielding new insights into polynomial regression under distribution shifts. Lastly, our techniques lead to novel results for tolerant testable learning [RV'23] and learning with nasty noise.
    Free, publicly-accessible full text available December 10, 2025
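    The paper's improved analysis is not reproduced here; as a point of reference, the following Python sketch shows the classical spectral outlier-removal loop it refines (our simplified variant: the eigenvalue threshold and the 1% trimming rule are arbitrary illustrative choices).

      import numpy as np

      def spectral_outlier_removal(X, max_eig=10.0):
          """Simplified spectral filtering (illustrative only).
          While the top eigenvalue of the empirical second-moment matrix is
          too large, drop points with the largest squared projection onto
          the corresponding eigenvector."""
          X = X.copy()
          while len(X) > 0:
              M = X.T @ X / len(X)              # empirical second-moment matrix
              eigvals, eigvecs = np.linalg.eigh(M)
              if eigvals[-1] <= max_eig:        # spectrum bounded: done
                  break
              proj = (X @ eigvecs[:, -1]) ** 2  # energy along the top direction
              keep = proj < np.quantile(proj, 0.99)
              if keep.all():                    # nothing left to trim
                  break
              X = X[keep]
          return X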
  4. A fundamental notion of distance between train and test distributions from the field of domain adaptation is discrepancy distance. While it is hard to compute in general, here we provide the first set of provably efficient algorithms for testing localized discrepancy distance, where discrepancy is computed with respect to a fixed output classifier (a standard rendering of both notions is sketched after this entry). These results imply a broad set of new, efficient learning algorithms in the recently introduced model of Testable Learning with Distribution Shift (TDS learning) due to Klivans et al. (2023). Our approach generalizes and improves all prior work on TDS learning: (1) we obtain universal learners that succeed simultaneously for large classes of test distributions, (2) we achieve near-optimal error rates, and (3) we give exponential improvements for constant-depth circuits. Our methods further extend to semi-parametric settings and imply the first positive results for low-dimensional convex sets. Additionally, we separate the learning and testing phases and obtain algorithms that run in fully polynomial time at test time.
    Free, publicly-accessible full text available December 10, 2025
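    For reference, a hedged LaTeX rendering (ours, following the domain-adaptation literature for 0-1 loss; the paper's exact definitions may differ) of discrepancy distance over a class H, and its localized version with one argument fixed to the output classifier hĢ‚:

      \[
        \mathrm{disc}_{\mathcal{H}}(D,D') \;=\; \sup_{h,h'\in\mathcal{H}}
          \Bigl|\, \Pr_{x\sim D}[h(x)\neq h'(x)] - \Pr_{x\sim D'}[h(x)\neq h'(x)] \,\Bigr|
      \]
      \[
        \mathrm{disc}_{\hat{h}}(D,D') \;=\; \sup_{h'\in\mathcal{H}}
          \Bigl|\, \Pr_{x\sim D}[\hat{h}(x)\neq h'(x)] - \Pr_{x\sim D'}[\hat{h}(x)\neq h'(x)] \,\Bigr|
      \]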
  5. We revisit the fundamental problem of learning with distribution shift, in which a learner is given labeled samples from training distribution D, unlabeled samples from test distribution D', and is asked to output a classifier with low test error. The standard approach in this setting is to bound the loss of a classifier in terms of some notion of distance between D and D'. These distances, however, seem difficult to compute and do not lead to efficient algorithms. We depart from this paradigm and define a new model called testable learning with distribution shift, where we can obtain provably efficient algorithms for certifying the performance of a classifier on a test distribution. In this model, a learner outputs a classifier with low test error whenever samples from D and D' pass an associated test; moreover, the test must accept (with high probability) if the marginal of D equals the marginal of D' (a toy test of this flavor is sketched after this entry). We give several positive results for learning well-studied concept classes such as halfspaces, intersections of halfspaces, and decision trees when the marginal of D is Gaussian or uniform on the hypercube. Prior to our work, no efficient algorithms for these basic cases were known without strong assumptions on D'. For halfspaces in the realizable case (where there exists a halfspace consistent with both D and D'), we combine a moment-matching approach with ideas from active learning to simulate an efficient oracle for estimating disagreement regions. To extend to the non-realizable setting, we apply recent work from testable (agnostic) learning. More generally, we prove that any function class with low-degree L2-sandwiching polynomial approximators can be learned in our model. Since we require L2-sandwiching (instead of the usual L1 loss), we cannot directly appeal to convex duality and instead apply constructions from the pseudorandomness literature to obtain the required approximators. We also provide lower bounds showing that the guarantees we obtain on the performance of our output hypotheses are best possible up to constant factors, as well as a separation showing that realizable learning in our model is incomparable to (ordinary) agnostic learning.
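    As an illustration of the moment-matching flavor of test this model permits (a toy sketch, not the paper's actual tester; the degree and tolerance are arbitrary), one can accept only when all low-degree empirical moments of the two marginals agree:

      import numpy as np
      from itertools import combinations_with_replacement

      def moments_close(X_train, X_test, degree=2, tol=0.1):
          """Toy TDS-style acceptance test (illustrative only): accept iff
          every empirical monomial moment of degree <= `degree` computed on
          the two marginals differs by less than `tol`."""
          d = X_train.shape[1]
          for k in range(1, degree + 1):
              for idx in combinations_with_replacement(range(d), k):
                  m_tr = np.prod(X_train[:, list(idx)], axis=1).mean()
                  m_te = np.prod(X_test[:, list(idx)], axis=1).mean()
                  if abs(m_tr - m_te) >= tol:
                      return False   # reject: marginals look shifted
          return True                # accept: moments match up to tol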
  6. In the well-studied agnostic model of learning, the goal of a learner, given examples from an arbitrary joint distribution, is to output a hypothesis that is competitive (to within ϵ) with the best-fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes in this model, we introduce a smoothed-analysis framework where we require a learner to compete only with the best classifier that is robust to small random Gaussian perturbations. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area (this quantity is sketched after this entry). This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Perhaps surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of k-halfspaces in time k^{poly(log k/(ϵγ))}, where γ is the margin parameter. Before our work, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
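    For reference, a standard textbook-style LaTeX rendering of Gaussian surface area for a set K āŠ† ℝ^d (our notation; stated here for the set-indicator case only):

      \[
        \Gamma(K) \;=\; \int_{\partial K} \varphi_{d}(x)\, d\sigma(x),
        \qquad
        \varphi_{d}(x) = (2\pi)^{-d/2} e^{-\|x\|^{2}/2},
      \]

    where σ is the surface measure on the boundary ∂K; for example, a single halfspace has constant Gaussian surface area, independent of the dimension d.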
  7. This paper addresses the challenge of learning under arbitrary distribution shift, where models are trained on data from one distribution but evaluated on a potentially adversarially generated test distribution. The authors study two frameworks: PQ learning [Goldwasser, A. Kalai, Y. Kalai, Montasser NeurIPS 2020], which allows abstention on adversarial portions of the test distribution (its guarantee is sketched after this entry), and TDS learning [Klivans, Stavropoulos, Vasilyan COLT 2024], which permits abstention on the entire test set if a distribution shift is detected. Previous algorithms either relied on computationally hard primitives (even for simple classes) or abstained entirely under even small shifts. This work develops efficient algorithms for natural function classes such as intersections of halfspaces and decision trees, under standard training distributions like Gaussians. The key innovation is an improved analysis of spectral outlier-removal techniques from learning with nasty noise, enabling (1) handling arbitrarily large fractions of outliers, crucial for arbitrary shifts, and (2) stronger bounds on polynomial moments after outlier removal, yielding new insights into polynomial regression under distribution shift. The techniques also produce novel results for tolerant testable learning [Rubinfeld and Vasilyan STOC 2023] and learning with nasty noise.
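    A hedged LaTeX paraphrase (ours, omitting quantifiers over the adversary's choices and the optimal-error term) of the PQ-learning guarantee: the learner outputs a classifier h together with an abstention region R; the abstention rate on the training distribution P and the error on the non-abstained portion of the test distribution Q must both be small.

      \[
        \Pr_{x\sim P}\bigl[x \in R\bigr] \le \epsilon
        \qquad\text{and}\qquad
        \Pr_{(x,y)\sim Q}\bigl[h(x)\neq y \;\wedge\; x\notin R\bigr] \le \epsilon
      \]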
  8. This paper investigates the problem of computing discrepancy distance, a key notion of distance between training and test distributions in domain adaptation. While computing discrepancy distance is hard in general, the authors present the first provably efficient algorithms for testing localized discrepancy distance, where the measure is computed with respect to a fixed output classifier (a toy sample-based estimator is sketched after this entry). These results lead to a new family of efficient learning algorithms under the recently introduced Testable Learning with Distribution Shift (TDS learning) framework (Klivans et al., 2023). The authors' contributions include: (1) universal learners that succeed simultaneously across a wide range of test distributions, (2) algorithms achieving near-optimal error rates, and (3) exponential improvements for constant-depth circuits. Their methods also extend to semi-parametric settings and yield the first positive results for low-dimensional convex sets. Furthermore, by separating the learning and testing phases, the authors provide algorithms that run in fully polynomial time at test time.
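    As a toy illustration (ours, not the paper's algorithm), localized discrepancy against a fixed output classifier can be estimated from finite samples when the comparison class is restricted to a finite pool of candidates:

      import numpy as np

      def localized_discrepancy(h, pool, X_train, X_test):
          """Estimate disc_h(D, D') = max over h' in a finite pool of
          |Pr_D[h != h'] - Pr_D'[h != h']| (illustrative only).
          h and each element of pool map an (n, d) array to +-1 labels."""
          yh_tr, yh_te = h(X_train), h(X_test)
          best = 0.0
          for h_prime in pool:
              p_tr = np.mean(yh_tr != h_prime(X_train))
              p_te = np.mean(yh_te != h_prime(X_test))
              best = max(best, abs(p_tr - p_te))
          return best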
  9. We give the first efficient algorithm for learning halfspaces in the testable learning model recently defined by Rubinfeld and Vasilyan [2022]. In this model, a learner certifies that the accuracy of its output hypothesis is near optimal whenever the training set passes an associated test, and training sets drawn from some target distribution must pass the test. This model is more challenging than distribution-specific agnostic or Massart noise models, where the learner is allowed to fail arbitrarily if the distributional assumption does not hold. We consider the setting where the target distribution is the standard Gaussian in d dimensions and the label noise is either Massart or adversarial (agnostic). For Massart noise, our tester-learner runs in polynomial time and outputs a hypothesis with (information-theoretically optimal) error opt + ϵ (and extends to any fixed strongly log-concave target distribution). For adversarial noise, our tester-learner obtains error O(opt) + ϵ in polynomial time. Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution (a minimal version of such a check is sketched after this entry). Here we develop new tests of independent interest that make critical use of the labels and combine them with the moment-matching approach of Gollakota et al. [2022]. This enables us to implement a testable variant of the algorithm of Diakonikolas et al. [2020a, 2020b] for learning noisy halfspaces using nonconvex SGD.
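    The label-oblivious moment check that the abstract contrasts with can be illustrated in a few lines of Python (a minimal sketch with an arbitrary tolerance; it checks only the first two moments against N(0, I_d)):

      import numpy as np

      def gaussian_moment_check(X, tol=0.1):
          """Illustrative label-oblivious test: accept iff the empirical mean
          is near 0 and the empirical covariance is near the identity."""
          n, d = X.shape
          mean_ok = np.all(np.abs(X.mean(axis=0)) < tol)
          cov_ok = np.all(np.abs(X.T @ X / n - np.eye(d)) < tol)
          return bool(mean_ok and cov_ok)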